Extending Context Window of Large Language Models via Positional Interpolation
We present Position Interpolation (PI), which extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models up to 32768 tokens with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization, on LLaMA models from 7B to 65B.
Meanwhile, models extended by Position Interpolation preserve quality
relatively well on tasks within their original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600\times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain their original architecture and can
reuse most pre-existing optimization and infrastructure.
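The core operation is a one-line rescaling of RoPE position indices before they are turned into rotation angles. The following NumPy sketch illustrates the idea; it is not the authors' code, and the 2048-to-8192 sizes, dimension, and function names are assumptions chosen for the example:

```python
import numpy as np

def rope_angles(positions, dim, base=10000.0):
    # Standard RoPE: dimension pair i rotates at frequency base^(-2i/dim).
    inv_freq = base ** (-np.arange(0, dim, 2) / dim)
    return np.outer(positions, inv_freq)          # shape (seq_len, dim // 2)

def interpolate_positions(positions, trained_len, extended_len):
    # Position Interpolation: linearly down-scale indices so the extended
    # range [0, extended_len) maps back into the trained range [0, trained_len).
    return positions * (trained_len / extended_len)

# Hypothetical sizes: model pretrained on 2048 tokens, extended to 8192.
trained_len, extended_len = 2048, 8192
pos = np.arange(extended_len)
scaled = interpolate_positions(pos, trained_len, extended_len)
angles = rope_angles(scaled, dim=128)

# Every scaled index stays strictly inside the trained context window,
# so no rotation angle falls outside the regime seen during pretraining.
assert scaled.max() < trained_len
```

Because only the position indices change, the attention weights, embeddings, and all other parameters are reused as-is, which is why the extended model needs so little fine-tuning.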
Scan and Snap: Understanding Training Dynamics and Token Composition in 1-layer Transformer
The Transformer architecture has shown impressive performance in multiple
research domains and has become the backbone of many neural network models.
However, there is limited understanding of how it works. In particular, with a
simple predictive loss, how the representation emerges from the gradient
\emph{training dynamics} remains a mystery. In this paper, for a 1-layer
transformer with one self-attention layer plus one decoder layer, we analyze
its SGD training dynamics for the task of next token prediction in a
mathematically rigorous manner. We open the black box of the dynamic process of
how the self-attention layer combines input tokens, and reveal the nature of
underlying inductive bias. More specifically, with the assumptions that (a)
there is no positional encoding, (b) the input sequence is long, and (c) the decoder layer learns
faster than the self-attention layer, we prove that self-attention acts as a
\emph{discriminative scanning algorithm}: starting from uniform attention, it
gradually attends more to distinct key tokens for a specific next token to be
predicted, and pays less attention to common key tokens that occur across
different next tokens. Among distinct tokens, it progressively drops attention
weights, following the order of low to high co-occurrence between the key and
the query token in the training set. Interestingly, this procedure does not
lead to winner-takes-all, but decelerates due to a \emph{phase transition} that
is controllable by the learning rates of the two layers, leaving (almost) fixed
token combination. We verify this \textbf{\emph{scan and snap}} dynamics on
synthetic and real-world data (WikiText).
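The scanning behavior can be illustrated with a toy simulation in the regime the result assumes: one-hot token embeddings, no positional encoding, and a decoder that has effectively already been learned (held fixed at a large scale). Every training sequence contains a common key token plus one distinct key token that determines the next token; SGD on the attention logits then shifts weight from the common key to the distinct one. This is an illustrative sketch under assumed toy settings, not the paper's construction:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy vocabulary: token 0 is a "common" key appearing in every sequence;
# tokens 1 and 2 are "distinct" keys, each determining the next token.
# Two training sequences: keys [0, 1] -> class 0, keys [0, 2] -> class 1.
data = [([0, 1], 0), ([0, 2], 1)]

# Attention logits per key token (start uniform), and a decoder assumed
# already near-converged (the "decoder learns faster" regime), held fixed.
z = np.zeros(3)
W = 5.0 * np.array([[0., 1., 0.],
                    [0., 0., 1.]])  # class i reads the i-th distinct key

lr = 0.5
history = []
for step in range(50):
    for tokens, label in data:
        p = softmax(z[tokens])            # attention over this sequence's keys
        c = np.zeros(3); c[tokens] = p    # context vector (one-hot embeddings)
        q = softmax(W @ c)                # decoder prediction
        # Backprop: dL/dc = W^T (q - onehot), then through softmax over z.
        g_c = W.T @ (q - np.eye(2)[label])
        g_p = g_c[tokens]
        g_z = p * (g_p - p @ g_p)         # softmax Jacobian-vector product
        z_upd = np.zeros(3); z_upd[tokens] = g_z
        z -= lr * z_upd
    # Track attention paid to distinct key 1 when keys {0, 1} are present.
    history.append(softmax(z[[0, 1]])[1])
```

Starting from uniform attention (0.5 on each key), the weight on the distinct key grows while the common key is suppressed, but the gradient shrinks as the decoder's prediction saturates, so the attention never fully collapses onto the distinct token, consistent with the "no winner-takes-all" deceleration described above.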
- …